22 research outputs found

    On the Network-Wide Gain of Memory-Assisted Source Coding

    Several studies have identified a significant amount of redundancy in network traffic. For example, it has been demonstrated that there is a great amount of redundancy within the content of a server over time. This redundancy can be leveraged to reduce the network flow by deploying memory units in the network. The question that arises is whether or not the deployment of memory can result in a fundamental improvement in the performance of the network. In this paper, we answer this question affirmatively by first establishing the fundamental gains of memory-assisted source compression and then applying the technique to a network. Specifically, we investigate the gain of memory-assisted compression in random network graphs consisting of a single source and several randomly selected memory units. We find a threshold value for the number of memories deployed in a random graph and show that if the number of memories exceeds this threshold, we observe a network-wide reduction in traffic. Comment: To appear in 2011 IEEE Information Theory Workshop (ITW 2011).
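    To make the memory-assisted idea concrete, the sketch below uses zlib's preset-dictionary feature as a rough stand-in for a memory unit (an illustrative assumption, not the authors' compression scheme): content previously observed by both encoder and decoder is supplied as a shared dictionary, so a new flow that overlaps with it compresses better than with memoryless compression. The traffic bytes are made up for illustration.

```python
import zlib

# Content previously observed at a memory unit and shared by encoder and decoder
# (hypothetical traffic used only for illustration).
memory = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nAccept: text/html\r\n" * 50

# A "new" flow that overlaps heavily with what the memory has already seen.
new_flow = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nAccept: text/html\r\n" * 5

def compressed_size(data, zdict=None):
    """Deflate `data`, optionally priming the compressor with a preset dictionary."""
    comp = zlib.compressobj(level=9, zdict=zdict) if zdict is not None else zlib.compressobj(level=9)
    return len(comp.compress(data) + comp.flush())

print("memoryless :", compressed_size(new_flow))          # plain compression of the new flow
print("with memory:", compressed_size(new_flow, memory))  # memory-assisted compression
```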

    Infocast: A New Paradigm for Collaborative Content Distribution from Roadside Units to Vehicular Networks Using Rateless Codes

    In this paper, we address the problem of distributing a large amount of bulk data to a sparse vehicular network from roadside infostations, using efficient vehicle-to-vehicle collaboration. Due to the highly dynamic nature of the underlying vehicular network topology, we depart from architectures requiring centralized coordination, reliable MAC scheduling, or global network-state knowledge, and instead adopt a distributed paradigm with simple protocols. In other words, we investigate the problem of reliable dissemination from multiple sources when each node in the network shares a limited amount of its resources for cooperating with others. By using rateless coding at the Road Side Unit (RSU) and using vehicles as data carriers, we describe an efficient way to achieve reliable dissemination to all nodes (even disconnected clusters in the network). In a nutshell, we explore the use of vehicles as mobile storage devices. We then develop a method to keep the density of rateless-coded packets, as a function of distance from the RSU, at the level required for the target decoding distance. We investigate various tradeoffs involving buffer size, maximum capacity, and the mobility parameter of the vehicles.
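    As a rough illustration of the rateless-coding ingredient, here is a generic LT-style fountain-code sketch (not the paper's scheme; the uniform degree distribution is a toy stand-in for the robust soliton distribution used in practice). The RSU can stream encoded symbols indefinitely, and a vehicle that collects modestly more than k of them can usually recover all k source blocks with a peeling decoder.

```python
import random

def encode_symbol(blocks, rng):
    """One rateless symbol: XOR of a random subset of source blocks (toy uniform degree)."""
    k = len(blocks)
    idx = rng.sample(range(k), rng.randint(1, k))
    payload = bytearray(blocks[idx[0]])
    for i in idx[1:]:
        for j in range(len(payload)):
            payload[j] ^= blocks[i][j]
    return set(idx), bytes(payload)

def peel_decode(symbols, k, block_len):
    """Iterative peeling: resolve degree-1 symbols, substitute them back, repeat."""
    decoded = [None] * k
    work = [(set(idx), bytearray(p)) for idx, p in symbols]
    progress = True
    while progress:
        progress = False
        for idx, payload in work:
            if len(idx) == 1:
                (i,) = idx
                if decoded[i] is None:
                    decoded[i] = bytes(payload)
                    progress = True
            else:
                for i in list(idx):
                    if decoded[i] is not None:
                        for j in range(block_len):
                            payload[j] ^= decoded[i][j]
                        idx.discard(i)
                        progress = True
    return decoded

# A vehicle that has gathered enough symbols (overhead above k) usually recovers all blocks.
rng = random.Random(0)
blocks = [bytes([b]) * 16 for b in range(8)]              # 8 source blocks of 16 bytes each
stream = [encode_symbol(blocks, rng) for _ in range(40)]  # symbols collected from the RSU / peers
print(peel_decode(stream, k=8, block_len=16) == blocks)
```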

    Results on the optimal memory-assisted universal compression performance for mixture sources

    Abstract: In this paper, we consider the compression of a sequence from a mixture of K parametric sources. Each parametric source is represented by a d-dimensional parameter vector that is drawn from Jeffreys' prior. The output of the mixture source is a sequence of length n whose parameter is chosen from one of the K source parameter vectors uniformly at random. We are interested in the scenario in which the encoder and the decoder have common side information of T sequences generated independently by the mixture source (which we refer to as the memory-assisted universal compression problem). We derive the minimum average redundancy of the memory-assisted universal compression of a new random sequence from the mixture source and prove that when K is below a certain threshold (for some ε > 0), the side information provided by the previous sequences results in a significant improvement, as a function of n, T, and d, over universal compression without side information. On the other hand, as K grows, the impact of the side information becomes negligible. Specifically, when K exceeds a certain threshold (for some ε > 0), optimal memory-assisted universal compression almost surely offers negligible improvement over universal compression without side information.
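    To make the setup concrete, the sketch below instantiates the abstract's data model for the simplest case d = 1, i.e., Bernoulli sources, for which Jeffreys' prior is the Beta(1/2, 1/2) distribution: it draws K component parameters, generates the T shared side-information sequences, and then generates a new length-n sequence whose component is chosen uniformly at random. The choice of equal sequence lengths and the numpy-based sampling are illustrative assumptions, not details from the paper.

```python
import numpy as np

def sample_mixture_setup(K, T, n, seed=0):
    """Toy instance of the memory-assisted setting for d = 1 (Bernoulli sources).

    Jeffreys' prior for a Bernoulli parameter is Beta(1/2, 1/2); each sequence is
    emitted by one of the K components chosen uniformly at random.
    """
    rng = np.random.default_rng(seed)
    thetas = rng.beta(0.5, 0.5, size=K)                 # K source parameters
    side_info = [rng.binomial(1, thetas[rng.integers(K)], size=n)
                 for _ in range(T)]                     # memory shared by encoder and decoder
    new_seq = rng.binomial(1, thetas[rng.integers(K)], size=n)
    return thetas, side_info, new_seq

thetas, memory, x = sample_mixture_setup(K=4, T=16, n=1024)
print(len(memory), round(x.mean(), 3))  # T side-information sequences; empirical bias of the new sequence
```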